ChatGPT can be a powerful tool for OSINT, but it and other generative AI require continuous education and training to be used successfully and ethically.

In the past year alone, generative AI has made incredible advancements in the field of investigative research. For OSINT analysts, AI can be a great tool for embracing new realms of knowledge, despite valid concerns that it could replace researcher jobs.

Because generative AI’s capabilities are highly beneficial and continuously improving, OSINT analysts should skill up on using it to improve their own intelligence building, while remaining aware that bad actors are adopting it at breakneck speed. It’s critical to understand the pros and cons of ChatGPT for OSINT while also learning how to conduct AI-assisted research ethically.

How ChatGPT for OSINT could future-proof the field

When thinking about the ways AI can reshape the future of OSINT, it’s important to consider how the technologies analysts use daily have evolved. The evolution of search engines, for instance, changed how OSINT researchers access information, and validating that information has become an equally significant part of the job.

In fact, using generative AI like ChatGPT for OSINT is merely the latest in a long line of technological revolutions that have made OSINT what it is today. A quick trip down memory lane:

  • OSINT grew out of spycraft as it shifted away from clandestine methods of information gathering (think phone tapping and couriers ferrying secure communications) and toward scouring publicly available information like newspapers, public files and databases.
  • With the advent of the internet, vast amounts of information became accessible to anyone, and OSINT became increasingly useful not just to sophisticated government and law enforcement agencies, but to financial crime analysts, cyberthreat intelligence analysts and others who harnessed the power of internet access.
  • Search engines further advanced OSINT by making it easier to sift through the volumes of open-source information available online (knowing how to use advanced search operators and Google dorks made searches even better).
  • The amount of open-source information to be gleaned from social media alone has now earned its own OSINT sub-genre: SOCMINT.

Viewed through the lens of history, generative AI is simply the latest tool to help OSINT analysts improve the efficiency and quality of their work, and hopefully analysts see it that way.

As generative AI becomes more sophisticated, it is essential for OSINT analysts to view it as a valuable tool in their analytical toolbox rather than a replacement for their abilities. As with all collected information, an OSINT analyst’s job, and value, is to verify it or to make clear to those acting on the intelligence how much it can be trusted.

Leveraging the power of AI can boost OSINT analysts’ efficiency, freeing up time for higher-level analysis and strategic decision-making. Producing better intelligence faster will also raise the profile of OSINT and potentially expand its use.

Bridging the gap: train and prompt ChatGPT for OSINT 

AI systems can be trained to enable nontechnical people to interact with data in complex ways. For one, large language models (LLMs) can be trained against your own data, or you can build your own LLM, to enable a simpler, more human interaction with that data; a minimal sketch of one common approach follows.
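
As a concrete illustration, here is a hedged sketch of grounding a model in your own data via retrieval augmentation, a common alternative to literally fine-tuning an LLM: your documents are embedded, the most relevant one is retrieved, and the model is told to answer only from it. It assumes the official OpenAI Python SDK and numpy; the model names are illustrative and the documents are hypothetical case notes, not a definitive implementation.

```python
# Hedged sketch: grounding a model in your own data via retrieval
# augmentation (a common alternative to literally fine-tuning an LLM).
# Assumes the official OpenAI Python SDK and numpy; model names are
# illustrative and the documents below are hypothetical case notes.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

documents = [
    "Subject registered the domain example-shop.biz in March 2023.",
    "The same email address appears on a forum post selling counterfeit goods.",
]

def embed(texts: list[str]) -> np.ndarray:
    """Embed strings into vectors for similarity search."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

def ask(question: str) -> str:
    """Retrieve the most relevant note, then answer only from it."""
    q = embed([question])[0]
    scores = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
    )
    context = documents[int(np.argmax(scores))]
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer strictly from the provided context. "
                        "If the answer is not in the context, say so."},
            {"role": "user",
             "content": f"Context: {context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(ask("What domain is linked to the subject?"))
```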

Prompt engineering is another practice that allows AI systems to tackle higher-level analysis questions. Generative AI like ChatGPT can take highly detailed, customized instructions from OSINT analysts via prompts, leading to more accurate information and analysis, as in the sketch below.
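
To make that concrete, here is a minimal prompt-engineering sketch, again assuming the official OpenAI Python SDK with an illustrative model name: instead of a one-line question, the prompt spells out the role, the task, the constraints and the output format. The source text is a hypothetical placeholder.

```python
# Hedged sketch: a detailed, customized prompt for an OSINT-style task.
# Assumes the official OpenAI Python SDK; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

PROMPT_TEMPLATE = """You are assisting an OSINT analyst.

Task: summarize what the text below claims about the subject.
Constraints:
- List each claim as a bullet, with a direct quote supporting it.
- Mark any claim you cannot support with a quote as UNVERIFIED.
- Do not speculate beyond the text.

Text:
{source_text}
"""

source_text = "Acme Corp announced a merger with Globex on 1 May."  # placeholder

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": PROMPT_TEMPLATE.format(source_text=source_text)}],
)
print(response.choices[0].message.content)
```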

On a recent episode of NeedleStack, OSINT Combine Founder and CEO Chris Poulter discusses the two core ways to train ChatGPT for OSINT.

Because the applications of generative AI are so extensive, companies are beginning to create more jobs for prompt engineers to bridge the gap between AI systems and data analysts. If you’re going it alone, this article has some great tips on prompt engineering to get you rolling, whatever the task at hand.

Moreover, there’s plenty of helpful information out there on how to advance LLMs, including on community forums.

By training AI against diverse datasets, users can build LLMs that enable more human-like interactions, enhancing both the user experience and the potential applications of AI.

Continuous training of AI systems is essential: these models still have a long way to go before they can fully replicate human intelligence, or, for example, the way an OSINT analyst would interact with a data scientist.

However, AI does have the power to fool people with its outputs, which is why it’s ripe to be weaponized by bad actors.

Don’t let AI-generated misinformation trip up your OSINT research

Despite its extensive list of benefits, generative AI also has the potential to cause significant harm. Criminals are already creating LLMs of their own to carry out cyberattacks and scams. Deepfakes and other forms of misinformation have been deployed at both macro and micro levels, leading to reputational damage, the spread of illegal activity and the manipulation of public opinion. AI-generated child sexual abuse imagery is just one example of the increasingly dangerous and volatile outcomes of AI now cropping up.

Moreover, researchers have begun to express concern about misinformation distribution at larger scales; analysts are questioning whether social media platforms can remove false claims while carefully drawing the line between free speech and fake news during the next election. Social media platforms and closed messaging apps such as WhatsApp are especially vulnerable because of the misinformation pipelines that already exist on these forums.

Chris Poulter discusses AI’s ability to influence humans to varying degrees, and at times in harmful ways. However, Poulter stresses the importance of continuously challenging AI in order to lessen its risks over time. Watch the full episode here > 

Given the rising wave of AI-assisted misinformation, OSINT researchers should continue to interrogate any collected information for possible falsehood or misrepresentation, even if it appears truthful on the surface.

Interrogating AI in OSINT investigations

As LLMs like those behind ChatGPT may be trained on false or misleading information, it’s important to challenge generative AI outputs to ensure you’re producing reliable intelligence. A few things to keep in mind when using ChatGPT for OSINT:

  • Where did the data come from?: There’s a major difference between the level of trust that should be given to an LLM working with a broad data set (e.g., the internet up to 2021) and one working from a relatively small data set you’ve provided for a particular investigation. By feeding an LLM a small data set and having it analyze only that information, you can greatly limit the chance of unfounded or bogus outputs.
  • What claims is the AI making?: “Hallucination” is the term used when ChatGPT makes up information in its output. Because ChatGPT is trained to speak with authority, hallucinations can be hard to catch on a quick read. As such, it’s important to carefully examine AI outputs for any claims that will need to be corroborated and verified (see the sketch after this list).
  • How could bias be affecting the AI’s outputs?: Like all technology, AI is vulnerable to inheriting the biases of the humans who designed it and of the human-generated content the LLM was trained on. Think critically about how bias could skew AI outputs and when further investigation may be needed.
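
One way to put the “what claims is the AI making?” check into practice is to have the model enumerate the factual claims in its own output so each can be corroborated against primary sources. Below is a hedged sketch assuming the official OpenAI Python SDK; the model name and the sample output being audited are illustrative, and the resulting checklist is a starting point for manual verification, not a substitute for it.

```python
# Hedged sketch: ask the model to turn a generated answer into a checklist
# of discrete factual claims that an analyst can then verify independently.
# Assumes the official OpenAI Python SDK and an OPENAI_API_KEY in the
# environment; the model name and sample text are illustrative.
from openai import OpenAI

client = OpenAI()

# Hypothetical AI output you want to interrogate
ai_output = (
    "Acme Corp was founded in 1998 in Tallinn and its CEO previously "
    "worked at Globex before the 2015 merger."
)

audit = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            "List every discrete factual claim in the following text as a "
            "numbered checklist. For each claim, note what kind of open "
            "source could corroborate or refute it.\n\nText:\n" + ai_output
        ),
    }],
)

# Each item still needs to be checked against primary sources by the analyst.
print(audit.choices[0].message.content)
```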

How analysts can keep up with emerging technologies

To stay ahead of advancing AI systems, OSINT analysts can proactively invest in AI education programs and continuously engage with the latest developments. Organizations can further foster knowledge-sharing and collaboration by establishing centers of excellence: dedicated hubs that facilitate the exchange of expertise, best practices and research findings among analysts.

Challenging AI systems through complex discourse, as one would with a human, is another way to stay up to date on the technology. Debating AI models to better understand their argumentative and research skills will help OSINT analysts recognize how far the technology’s development has progressed.

A great way to gauge the reliability of ChatGPT or other generative AI solutions for OSINT is to ask them questions about yourself. Since you already know the ground truth, the accuracy of those outputs is a useful indicator of how accurate outputs might be in another investigation.

Creating ethics and security policies for using ChatGPT for OSINT

Establishing ethical policies for AI use is of the utmost importance in today’s rapidly advancing world. While some companies may shy away from AI altogether out of concern, it is crucial to recognize that AI is becoming increasingly pervasive across industries and broadly beneficial to fields such as OSINT research. Rather than avoiding it, companies should embrace the responsible use of AI by educating their employees and implementing clear guidelines for its ethical application. Ignoring ethical policies can damage an organization’s credibility and the effectiveness of its OSINT research.

By prioritizing ethical considerations and keeping in mind the risks and limitations of generative AI, analysts can harness the possibilities of AI while safeguarding their integrity and ensuring a positive impact on society.


Learn more about the fundamentals of OSINT research on our blog >
